
    LSST Science Data Quality Analysis Subsystem Design

    The Large Synoptic Survey Telescope (LSST) will have a Science Data Quality Analysis (SDQA) subsystem for vetting its unprecedented volume of astronomical image data. The SDQA subsystem inhabits three basic realms: image processing, graphical-user-interface (GUI) tools, and alarms/reporting. During pipeline image processing, SDQA data are computed for the images and for the astronomical sources extracted from them, and are used to grade the images and sources. Alarms are automatically sent, if necessary, to initiate a swift response to any problems found. Both the SDQA data and the machine-determined grades are stored in a database. At the end of a data-processing interval, e.g., nightly processing or data-release reprocessing, automatic SDQA reports are generated from the SDQA data and grades queried from the database. The SDQA reports summarize the science data quality and provide feedback to telescope, camera, facility, observation-scheduling, and data-processing personnel. During operations, GUI tools facilitate visualization of image and SDQA data in a variety of ways, allowing a small SDQA-operations team to quickly and easily perform manual SDQA on a substantial fraction of LSST data products and, if warranted, reassign SDQA grades as a result of the visual inspection.
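
    The compute-grade-alarm-store-report flow described above can be illustrated with a minimal Python sketch. Everything named here is hypothetical: ImageMetrics, grade_image, the threshold values, and the SQLite schema are illustrative stand-ins under assumed metrics, not the actual LSST SDQA interfaces.

    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class ImageMetrics:
        image_id: str
        seeing_arcsec: float   # delivered image quality (assumed metric)
        sky_background: float  # counts per pixel (assumed metric)
        n_sources: int         # sources extracted from the image

    def send_alarm(image_id: str, grade: str) -> None:
        # Placeholder for the alarms/reporting realm.
        print(f"ALARM: image {image_id} graded {grade}")

    def grade_image(m: ImageMetrics) -> str:
        """Assign a machine-determined grade from simple, illustrative thresholds."""
        if m.seeing_arcsec > 2.0 or m.n_sources == 0:
            return "fail"
        if m.seeing_arcsec > 1.2:
            return "marginal"
        return "pass"

    def process(m: ImageMetrics, db: sqlite3.Connection) -> None:
        grade = grade_image(m)
        if grade == "fail":
            send_alarm(m.image_id, grade)
        # Both the SDQA metrics and the machine grade are persisted for later reporting.
        db.execute(
            "INSERT INTO sdqa (image_id, seeing, sky, n_sources, grade) VALUES (?, ?, ?, ?, ?)",
            (m.image_id, m.seeing_arcsec, m.sky_background, m.n_sources, grade),
        )

    def nightly_report(db: sqlite3.Connection) -> dict:
        """At the end of a processing interval, summarize grades queried from the database."""
        rows = db.execute("SELECT grade, COUNT(*) FROM sdqa GROUP BY grade").fetchall()
        return dict(rows)

    if __name__ == "__main__":
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE sdqa (image_id TEXT, seeing REAL, sky REAL, n_sources INTEGER, grade TEXT)")
        process(ImageMetrics("visit-0001", 0.9, 350.0, 41230), db)
        process(ImageMetrics("visit-0002", 2.4, 900.0, 0), db)   # triggers the alarm path
        print(nightly_report(db))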

    Science data quality assessment for the Large Synoptic Survey Telescope

    LSST will have a Science Data Quality Assessment (SDQA) subsystem for assessing the data products produced over the course of its 10-year survey. The LSST will produce unprecedented volumes of astronomical data as it surveys the accessible sky every few nights. The SDQA subsystem will enable comparisons of the science data with expectations from prior experience and models, and with established requirements for the survey. While analogous systems have been built for previous large astronomical surveys, SDQA for LSST must meet a unique combination of challenges. Chief among them will be the extraordinary data rate and volume, which restricts the bulk of the quality computations to the automated processing stages, as revisiting the pixels for a post-facto evaluation is prohibitively expensive. The identification of appropriate scientific metrics is driven by the breadth of the expected science, the scope of the time-domain survey, the need to tap the widest possible pool of scientific expertise, and the historical tendency of new quality metrics to be crafted and refined as experience grows. Prior experience suggests that contemplative, off-line quality analyses are essential to distilling new automated quality metrics, so the SDQA architecture must support integration with a variety of custom and community-based tools, and be flexible enough to embrace evolving QA demands. Finally, the time-domain nature of LSST means every exposure may be useful for some scientific purpose, so the model of quality thresholds must be sufficiently rich to reflect the quality demands of diverse science aims.
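
    The final point, that quality thresholds must reflect the demands of diverse science aims, can be illustrated with a small Python sketch. The science purposes, metric names, and cut values below are hypothetical, not LSST requirements; the idea is simply that an exposure is graded per science purpose rather than given a single pass/fail, so an image too poor for one program may still serve another.

    # Per-purpose quality cuts: each science aim applies its own limits (illustrative values).
    THRESHOLDS = {
        "weak_lensing":     {"max_seeing": 0.9, "min_depth": 24.5},
        "transient_alerts": {"max_seeing": 1.8, "min_depth": 22.0},
        "astrometry":       {"max_seeing": 1.4, "min_depth": 21.0},
    }

    def usable_for(seeing: float, depth: float) -> list[str]:
        """Return the science purposes for which this exposure meets the cuts."""
        return [
            purpose
            for purpose, cuts in THRESHOLDS.items()
            if seeing <= cuts["max_seeing"] and depth >= cuts["min_depth"]
        ]

    # An exposure that fails the weak-lensing and astrometry cuts is still usable elsewhere.
    print(usable_for(seeing=1.5, depth=22.7))   # ['transient_alerts']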

    Spitzer Warm Mission Transition and Operations

    Following the successful dynamic planning and implementation of IRAC Warm Instrument Characterization activities, the transition to Spitzer Warm Mission operations has gone smoothly. Operations team procedures and processes required minimal adaptation, and the overall composition of the Mission Operations System retained the same functionality it had during the Cryogenic Mission. While warm-mission scheduling has been simplified because all observations are now made with a single instrument, several other differences have increased the complexity. The bulk of the observations executed to date have come from ten large Exploration Science programs that, combined, have more complex constraints, more observing requests, and more exoplanet observations with durations of up to 145 hours. Communication with the observatory is also becoming more challenging, as the Spitzer DSN antenna allocations have been reduced from two tracking passes per day to a single pass, impacting both uplink and downlink activities. While IRAC is now operating with only two channels, the data collection rate is roughly 60% of the four-channel rate, leaving a somewhat higher average volume collected between the less frequent passes. Also, the maximum downlink data rate is decreasing as the distance to Spitzer increases, requiring longer passes. Nevertheless, with well over 90% of the time spent on science observations, efficiency has equaled or exceeded that achieved during the cryogenic mission.
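
    The data-volume point lends itself to a back-of-the-envelope check. The short Python sketch below uses only the ratios quoted in the abstract (two tracking passes per day reduced to one, and a two-channel collection rate of roughly 60% of the four-channel rate); the result is a relative volume per pass, not an absolute figure.

    # Cryogenic mission: four channels, two downlink passes per day.
    # Warm mission: two channels at ~60% of the four-channel rate, one pass per day.
    rate_ratio  = 0.60   # warm data-collection rate relative to cryogenic
    passes_cryo = 2      # tracking passes per day, cryogenic mission
    passes_warm = 1      # tracking passes per day, warm mission

    # Volume accumulated between passes, relative to one cryogenic pass:
    relative_volume_per_pass = rate_ratio * (passes_cryo / passes_warm)
    print(relative_volume_per_pass)   # 1.2 -> roughly 20% more data queued per (longer) pass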
